
Search in the Catalogues and Directories

Hits 1–20 of 163 (page 1 of 9)

1. Probing for the Usage of Grammatical Number ... (BASE)
2. Estimating the Entropy of Linguistic Distributions ... (BASE)
3. A Latent-Variable Model for Intrinsic Probing ... (BASE)
4. On Homophony and Rényi Entropy ... (BASE)
5. On Homophony and Rényi Entropy ... (BASE)
6. On Homophony and Rényi Entropy ... (BASE)
7. Towards Zero-shot Language Modeling ... (BASE)
8. Differentiable Generative Phonology ... (BASE)
9. Finding Concept-specific Biases in Form–Meaning Associations ... (BASE)
10. Searching for Search Errors in Neural Morphological Inflection ... (BASE)
11. Applying the Transformer to Character-level Transduction ... (BASE)
Wu, Shijie; Cotterell, Ryan; Hulden, Mans. ETH Zurich, 2021.
Abstract: The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks. Yet for character-level transduction tasks, e.g. morphological inflection generation and historical text normalization, there are few works that outperform recurrent models using the transformer. In an empirical study, we uncover that, in contrast to recurrent sequence-to-sequence models, the batch size plays a crucial role in the performance of the transformer on character-level tasks, and we show that with a large enough batch size, the transformer does indeed outperform recurrent models. We also introduce a simple technique to handle feature-guided character-level transduction that further improves performance. With these insights, we achieve state-of-the-art performance on morphological inflection and historical text normalization. We also show that the transformer outperforms a strong baseline on two other character-level transduction tasks: grapheme-to-phoneme ...
In: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume ...
(A brief illustrative code sketch of this setup follows the results list below.)
URL: https://dx.doi.org/10.3929/ethz-b-000518998
http://hdl.handle.net/20.500.11850/518998
12. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models ... (BASE)
13. Probing as Quantifying Inductive Bias ... (BASE)
14. Revisiting the Uniform Information Density Hypothesis ... (BASE)
15. Revisiting the Uniform Information Density Hypothesis ... (BASE)
16. Conditional Poisson Stochastic Beams ... (BASE)
17. Examining the Inductive Bias of Neural Language Models with Artificial Languages ... (BASE)
18. Modeling the Unigram Distribution ... (BASE)
19. Language Model Evaluation Beyond Perplexity ... (BASE)
20. Differentiable Subset Pruning of Transformer Heads ... (BASE)
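The abstract of hit 11 above describes using a standard transformer for character-level transduction, with morphological feature tags guiding generation and a sufficiently large batch size being decisive. The following is a minimal PyTorch sketch of that kind of setup, not the authors' code: the shared character/feature vocabulary, the prepending of feature tags to the source characters, the batch size of 400, and all hyperparameters are assumptions for illustration, and positional encodings are omitted for brevity.

# Hypothetical sketch of character-level transduction with a transformer,
# loosely following the abstract of hit 11 (not the authors' implementation).
import torch
import torch.nn as nn

PAD_ID = 0  # assumed padding index in a shared character/feature vocabulary

class CharTransducer(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 256, nhead: int = 4,
                 num_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=PAD_ID)
        # Positional encodings are omitted for brevity; a real model needs them.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.project = nn.Linear(d_model, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src: feature tags prepended to source characters, e.g. [V, PST, g, e, b, e, n]
        # tgt: target characters shifted right, e.g. [BOS, g, a, b]
        tgt_mask = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.transformer(self.embed(src), self.embed(tgt), tgt_mask=tgt_mask)
        return self.project(hidden)

# The abstract stresses that batch size matters for character-level tasks;
# 400 is an arbitrary illustrative value, not a setting reported in the paper.
model = CharTransducer(vocab_size=100)
src = torch.randint(1, 100, (400, 12))   # 400 source sequences of length 12
tgt = torch.randint(1, 100, (400, 8))    # corresponding target prefixes
logits = model(src, tgt)                 # shape (400, 8, 100)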


Hits by source type:
Catalogues: 1
Bibliographies: 0
Linked Open Data catalogues: 0
Online resources: 0
Open access documents: 162